Discrete Adjoint Schrödinger Bridge Sampler

Guo, Wei, Zhu, Yuchen, Du, Xiaochen, Nam, Juno, Chen, Yongxin, Gómez-Bombarelli, Rafael, Liu, Guan-Horng, Tao, Molei, Choi, Jaemoo

arXiv.org Machine Learning

Learning discrete neural samplers is challenging due to the lack of gradients and combinatorial complexity. While stochastic optimal control (SOC) and Schrödinger bridge (SB) provide principled solutions, efficient SOC solvers like adjoint matching (AM), which excel in continuous domains, remain unexplored for discrete spaces. We bridge this gap by revealing that the core mechanism of AM is state-space agnostic, and introduce discrete ASBS, a unified framework that extends AM and the adjoint Schrödinger bridge sampler (ASBS) to discrete spaces. Theoretically, we analyze the optimality conditions of the discrete SB problem and its connection to SOC, identifying a necessary cyclic group structure on the state space to enable this extension. Empirically, discrete ASBS achieves competitive sample quality with significant advantages in training efficiency and scalability.
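The "cyclic group structure on the state space" mentioned in the abstract can be made concrete with a toy example: states identified with Z/nZ, and a transition kernel defined by random group shifts, so the kernel is invariant under rotation of the state space. This is our own illustrative sketch of what such a structure looks like, not the paper's implementation; the function name and step distribution are hypothetical.

```python
import numpy as np

def cyclic_shift_kernel(n, step_probs):
    """Transition matrix on Z/nZ built from random shifts.

    step_probs maps a shift s to its probability; each row i puts
    mass step_probs[s] on state (i + s) mod n, so P[i, j] depends
    only on (j - i) mod n (shift invariance).
    """
    P = np.zeros((n, n))
    for s, prob in step_probs.items():
        for i in range(n):
            P[i, (i + s) % n] += prob
    return P

# Hypothetical lazy random walk on Z/5Z: stay with prob 0.8,
# step +1 or -1 (mod 5) with prob 0.1 each.
P = cyclic_shift_kernel(5, {0: 0.8, 1: 0.1, -1: 0.1})
# Rows sum to 1, and P[i, j] equals P[i+k, j+k] (indices mod 5).
```

Shift invariance is what makes the kernel compatible with the group operation on the state space, which is the property the abstract's analysis relies on.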






Is L2 Physics-Informed Loss Always Suitable for Training Physics-Informed Neural Network?

Neural Information Processing Systems

In particular, we leverage the concept of stability from the partial differential equation literature to study the asymptotic behavior of the learned solution as the loss approaches zero. With this concept, we study an important class of high-dimensional non-linear PDEs in optimal control, the Hamilton-Jacobi-Bellman (HJB) equation, and prove that for a general Lp physics-informed loss, a wide class of HJB equations is stable only if p is sufficiently large.
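The Lp physics-informed loss discussed above penalizes the PDE residual at collocation points with the p-th power of its absolute value. A minimal sketch (our own illustration; the function name and residual values are hypothetical, not from the paper) shows why the choice of p matters: as p grows, the loss is increasingly dominated by the largest residuals, approaching a sup-norm-style penalty.

```python
import numpy as np

def lp_pinn_loss(residuals, p):
    """Generic L^p physics-informed loss: mean of |r|^p over collocation points."""
    return np.mean(np.abs(residuals) ** p)

# Hypothetical PDE residuals at four collocation points:
# mostly small errors, one large outlier.
r = np.array([0.1, -0.05, 2.0, 0.01])

loss_l2 = lp_pinn_loss(r, 2)   # outlier contributes |2|^2 = 4 of the sum
loss_l8 = lp_pinn_loss(r, 8)   # outlier contributes |2|^8 = 256, dwarfing the rest
```

With p = 2 the single large residual accounts for most but not all of the loss, while with p = 8 it is essentially the entire loss, so minimizing the loss forces the worst-case residual down. This is the intuition behind requiring p to be sufficiently large for stability.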